Horizon-free regret


Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret

Neural Information Processing Systems

We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to induce an optimistic SSP problem whose associated value iteration scheme is guaranteed to converge. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$, which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret only contains a logarithmic dependence on $T_{\star}$, thus yielding the first (nearly) horizon-free regret bound beyond the finite-horizon MDP setting.
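The abstract packs two mechanisms into one sentence, so a toy illustration may help. Below is a minimal tabular sketch, not the authors' EB-SSP, of the two ingredients it names: skewing the empirical transitions so a small amount of probability mass always flows to the goal (which makes value iteration converge), and subtracting an exploration bonus from the empirical costs to induce optimism. The function name, the bonus form, and all constants are assumptions for illustration only.

```python
import numpy as np

def optimistic_ssp_value_iteration(counts, cost_sum, goal, B=1.0,
                                   delta=1e-3, tol=1e-6, max_iter=10_000):
    """Toy optimistic value iteration for a tabular SSP (illustrative only).

    counts[s, a, s'] -- observed transition counts; `goal` indexes the goal state
    cost_sum[s, a]   -- accumulated observed costs for each (s, a)
    B                -- assumed scale of the optimal value, used in the bonus
    delta            -- probability mass skewed toward the goal
    """
    S, A, _ = counts.shape
    n = np.maximum(counts.sum(axis=2), 1)              # visits per (s, a)
    P_hat = counts / n[:, :, None]                     # empirical transitions
    # Skew: shift a small probability toward the goal so the induced SSP is
    # proper and the value-iteration operator is guaranteed to converge.
    P_tilde = (1.0 - delta) * P_hat
    P_tilde[:, :, goal] += delta
    c_hat = cost_sum / n                               # empirical mean costs
    bonus = B * np.sqrt(np.log(2.0 * S * A * n.max()) / n)  # illustrative bonus
    c_opt = np.clip(c_hat - bonus, 0.0, None)          # optimistic (lowered) costs

    V = np.zeros(S)
    for _ in range(max_iter):
        Q = c_opt + P_tilde @ V                        # (S, A) Bellman backup
        V_new = Q.min(axis=1)
        V_new[goal] = 0.0                              # goal is absorbing, zero cost
        done = np.max(np.abs(V_new - V)) < tol
        V = V_new
        if done:
            break
    return V, Q.argmin(axis=1)                         # values and greedy policy
```

The actual algorithm additionally handles when to re-solve the optimistic SSP and how to proceed without prior knowledge of $B_{\star}$; none of that is reflected in this sketch.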


Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs

Neural Information Processing Systems

In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees, yet it is challenging because variances are often not known a priori. Recently, considerable progress was made by Zhang et al. (2021), who obtained a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\widetilde{O}(\min\{d\sqrt{K},\, d^{1.5}\sqrt{\sum_{k=1}^{K} \sigma_k^2}\} + d^2)$, where $d$ is the dimension of the features, $K$ is the time horizon, $\sigma_k^2$ is the noise variance at time step $k$, and $\widetilde{O}$ ignores polylogarithmic dependence; this is a factor of $d^3$ improvement. For linear mixture MDPs, under the assumption that the maximum cumulative reward in an episode is in $[0,1]$, we achieve a horizon-free regret bound of $\widetilde{O}(d\sqrt{K} + d^2)$, where $d$ is the number of base models and $K$ is the number of episodes.
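To make the variance-adaptive idea concrete, here is a minimal sketch of one round of variance-weighted ridge regression with a UCB rule for linear bandits. It assumes each round's noise variance $\sigma_k^2$ is observed after the fact, whereas the algorithms analyzed in the paper must adapt without knowing the variances; the function name, the weighting scheme, and the width parameter `beta` are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def variance_weighted_ucb_step(X_hist, y_hist, var_hist, arms,
                               lam=1.0, beta=1.0, var_min=1e-4):
    """One round of a variance-weighted UCB sketch for linear bandits.

    X_hist, y_hist, var_hist -- past features, rewards, observed variances
    arms                     -- candidate feature vectors, shape (n_arms, d)
    """
    d = arms.shape[1]
    Lam = lam * np.eye(d)                  # regularized design matrix
    b = np.zeros(d)
    for x, y, v in zip(X_hist, y_hist, var_hist):
        w = 1.0 / max(v, var_min)          # inverse-variance weight:
        Lam += w * np.outer(x, x)          # low-noise rounds count more
        b += w * y * x
    theta = np.linalg.solve(Lam, b)        # weighted ridge estimate
    Lam_inv = np.linalg.inv(Lam)
    # Optimistic index: estimated reward plus an exploration width that
    # shrinks fastest along well-explored, low-variance directions.
    widths = np.sqrt(np.einsum('ij,jk,ik->i', arms, Lam_inv, arms))
    return int(np.argmax(arms @ theta + beta * widths))
```

The intuition behind the bound: when the $\sigma_k^2$ are small, the weighted design matrix grows quickly and the confidence widths shrink, which is why the regret can scale with $\sqrt{\sum_{k=1}^{K} \sigma_k^2}$ rather than $\sqrt{K}$.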

